
    Towards integrated urban simulations

    More than half of the world's population lives in urban areas, and urban dwellers are projected to reach 68% of the population by 2050 [1]. This rapid growth demands new contributions from researchers and policy-makers to the development of the future city, and understanding how the city will grow is a crucial step in guiding this process towards the best outcome. Cities are highly complex systems that traditional urban dynamics simulations cannot grasp in their totality if they are only loosely coupled. In addition, a model is useful only if it can be used in planning and management practice [2]. Driven by the urge to improve their models, different sectors are developing multi-layered integrated simulations; nevertheless, a wider scope that considers the city in its holistic behaviour is still missing. Indeed, managerial, social, and technical barriers restrain the adoption of integrated models, such as 'model complexity, user friendliness, administrative fragmentation and communication' [3].

    A Second-Order Distributed Trotter-Suzuki Solver with a Hybrid Kernel

    The Trotter-Suzuki approximation leads to an efficient algorithm for solving the time-dependent Schrödinger equation. Using existing highly optimized CPU and GPU kernels, we developed a distributed version of the algorithm that runs efficiently on a cluster. Our implementation also improves single-node performance and is able to use multiple GPUs within a node. The scaling is close to linear using the CPU kernels, whereas the efficiency of the GPU kernels improves with larger matrices. We also introduce a hybrid kernel that simultaneously uses multicore CPUs and GPUs in a distributed system. This kernel is shown to be efficient when the matrix would not fit in GPU memory. Larger quantum systems scale especially well to a high number of nodes. The code is available under an open source license.
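    For context, a minimal sketch of the second-order splitting the abstract refers to, assuming the usual decomposition of the Hamiltonian into kinetic and potential parts H = T + V (standard notation, not copied from the paper):

    ```latex
    % Second-order (Strang) Trotter-Suzuki splitting of one time step \Delta t,
    % assuming H = T + V with non-commuting T and V:
    e^{-iH\Delta t} \;=\; e^{-iV\Delta t/2}\, e^{-iT\Delta t}\, e^{-iV\Delta t/2} \;+\; \mathcal{O}(\Delta t^{3})
    % Composed over many steps, the global error is O(\Delta t^2); each factor is
    % cheap to apply because T is diagonal in momentum space and V in real space.
    ```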

    Universal decoherence induced by an environmental quantum phase transition

    Decoherence induced by coupling a system with an environment may display universal features. Here we demonstrate that when the coupling to the system drives a quantum phase transition in the environment, the temporal decay of quantum coherences in the system is Gaussian, with a width independent of the system-environment coupling strength. The existence of this effect opens the way for a new type of quantum simulation algorithm, in which a single qubit is used to detect a quantum phase transition. We discuss possible implementations of such an algorithm, and we relate our results to available data on universal decoherence in NMR echo experiments.
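    As a worked illustration of the claim (in our notation, not necessarily the paper's), the Gaussian decay of the system's coherences can be written as:

    ```latex
    % Gaussian decay of an off-diagonal element of the system's density matrix,
    % with a width \tau independent of the system-environment coupling strength:
    |\rho_{01}(t)| \;\approx\; |\rho_{01}(0)|\, e^{-t^{2}/2\tau^{2}}
    % Universality here means \tau is set by the environment's critical dynamics,
    % not by how strongly the probe system couples to it.
    ```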

    The Loschmidt echo in classically chaotic systems

    Thesis (Doctorate in Physics), Universidad Nacional de Córdoba, 2004. The Loschmidt echo (LE) is a measure of the sensitivity of quantum mechanics to perturbations in the evolution operator. In this thesis it is studied in systems that have a classical counterpart with dynamical instability, that is, classically chaotic systems. An analytical treatment is presented, showing that in a certain regime of the parameters the LE decays exponentially, and some particularly interesting examples are given. Numerical studies that support the analytical results are then shown. Moreover, regimes that are not accessible to the theory are explored, demonstrating that the LE and its Lyapunov regime display the same universality that is ascribed to classical chaos. Finally, the relation between the LE and the quantum-classical transition is explored, in particular its connection with decoherence theory. Using two different approaches, a semiclassical approximation to the Wigner function and a master equation for the LE, it is shown that the decoherence rate and the decay rate of the LE are equal. Author: Fernando Martín Cucchietti.
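    For reference, the standard definition of the Loschmidt echo and its exponential (Lyapunov) decay regime, in conventional notation (ours, not copied from the thesis):

    ```latex
    % Loschmidt echo: overlap of a state evolved forward with H and "rewound"
    % with a perturbed Hamiltonian H' = H + \Sigma:
    M(t) \;=\; \bigl|\langle \psi_0 |\, e^{iH't/\hbar}\, e^{-iHt/\hbar} \,|\psi_0\rangle\bigr|^{2}
    % In the perturbation-independent regime of classically chaotic systems,
    % the decay is exponential with the classical Lyapunov exponent \lambda:
    M(t) \;\sim\; e^{-\lambda t}
    ```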

    Measuring the Effectiveness of Static Maps to Communicate Changes over Time

    Both in digital and print media, it is common to use static maps to show the evolution of values in various regions over time. The ability to communicate local or global trends, while reducing the cognitive load on readers, is of vital importance for an audience that is not always well versed in map interpretation. This study aims to measure the effectiveness of four static map types (choropleth maps, tile grid maps, and their banded versions) in presenting changes over time from a user-experience perspective. We first evaluate the effectiveness of these map types through a quantitative performance analysis (completion time and success rates). In a second phase, we gather qualitative data to detect which type of map best supports decision-making. On a quantitative level, our results show that certain types of maps work better for showing global trends, while other types are more useful when analyzing regional trends or detecting the regions that fit a specific pattern. On a qualitative level, representations that are already familiar to the user are often rated higher despite having lower measured success rates.

    Introducing polyglot-based data-flow awareness to time-series data stores

    The rising interest in extracting value from data has led to a broad proliferation of monitoring infrastructures, most notably composed of sensors, intended to collect this new oil. Gathering data has thus become fundamental for a great number of applications, such as predictive maintenance techniques or anomaly detection algorithms. However, before data can be refined into insights and knowledge, it has to be efficiently stored and prepared for later retrieval. As a consequence of this sensor and IoT boom, time-series databases (TSDBs), designed to manage sensor data, have been the fastest-growing database category since 2019. Here we propose a holistic approach intended to improve TSDB performance and efficiency. More precisely, we introduce and evaluate a novel polyglot-based approach, aimed at tailoring the data store not only to time-series data, as is done conventionally, but also to the data flow itself, from ingestion to retrieval. In order to evaluate the approach, we materialize it in an alternative implementation of NagareDB, a resource-efficient time-series database based on MongoDB, in turn the most popular NoSQL storage solution. After implementing our approach in the database, we observe a global speed-up, solving queries up to 12 times faster than MongoDB's recently launched time-series capability, as well as generally outperforming InfluxDB, the most popular time-series database. Our polyglot-based, data-flow-aware solution can ingest data more than two times faster than MongoDB, InfluxDB, and NagareDB's original implementation, while using the same disk space as InfluxDB and half of that requested by MongoDB. This research was partly supported by the Spanish Ministry of Science and Innovation (contract PID2019-107255GB) and by the Generalitat de Catalunya (contract 2017-SGR-1414).
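    A minimal sketch of what data-flow-aware polyglot routing can look like, assuming a hypothetical two-tier setup in which recent "hot" data lands in a fast store and older "cold" data is compacted into a second one; all class and method names here are illustrative, not NagareDB's actual API:

    ```python
    import time

    class PolyglotTimeSeriesStore:
        """Illustrative router: tailors storage to the data flow, not just the data.

        Recent samples (likely to be queried soon) stay in a hot tier optimized
        for ingestion; samples older than `hot_window_s` move to a cold tier.
        """

        def __init__(self, hot_window_s=3600):
            self.hot_window_s = hot_window_s
            self.hot = {}    # sensor_id -> list of (timestamp, value), fast ingestion
            self.cold = {}   # sensor_id -> list of (timestamp, value), compacted

        def ingest(self, sensor_id, value, ts=None):
            """Append a sample to the hot tier (the write-optimized path)."""
            ts = ts if ts is not None else time.time()
            self.hot.setdefault(sensor_id, []).append((ts, value))

        def compact(self):
            """Move samples older than the hot window into the cold tier."""
            cutoff = time.time() - self.hot_window_s
            for sensor_id, samples in self.hot.items():
                stale = [s for s in samples if s[0] < cutoff]
                if stale:
                    self.cold.setdefault(sensor_id, []).extend(stale)
                    self.hot[sensor_id] = [s for s in samples if s[0] >= cutoff]

        def query(self, sensor_id, t_from, t_to):
            """Serve a range query transparently from both tiers."""
            rows = self.cold.get(sensor_id, []) + self.hot.get(sensor_id, [])
            return [s for s in rows if t_from <= s[0] <= t_to]
    ```

    The design choice this sketch illustrates is that ingestion, compaction, and retrieval each hit the representation best suited to that stage of the data flow, rather than forcing one layout to serve all three.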

    A compromise archive platform for monitoring infrastructures

    The great advancement in the technological field has led to an explosion in the amount of generated data. Many different sectors have understood the opportunity that acquiring, storing, and analyzing further information represents, which has led to a broad proliferation of measurement devices. The typical job of those sensors is to monitor the state of the enterprise ecosystem, which can range from a traditional factory, to a commercial mall, or even to the largest experiment on Earth [1]. Big enterprises (BEs) are building their own big data architectures, usually made of a combination of several state-of-the-art technologies. Finding new interesting data to measure, store, and analyze has become a daily process in the industrial field. However, small and medium-sized enterprises (SMEs) usually lack the resources needed to build those data-handling architectures, not just in terms of hardware, but also in terms of hiring personnel who can master all those rapidly evolving technologies. Our research tries to adapt two widely used technologies into a single, elastic, and moldable one, by tuning them, to offer an alternative and efficient solution for this very specific, but common, scenario.

    Dynamics of the Bose-Hubbard model: transition from Mott insulator to superfluid

    We study the dynamics of phase transitions in the one-dimensional Bose-Hubbard model. To drive the system from the Mott insulator to the superfluid phase, we change the tunneling frequency at a finite rate. We investigate the build-up of correlations during fast and slow transitions using variational wave functions, dynamical Bogoliubov theory, the Kibble-Zurek mechanism, and numerical simulations. We show that time-dependent correlations satisfy characteristic scaling relations that can be measured in optical lattices filled with cold atoms.
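    For orientation, the generic Kibble-Zurek scaling the abstract alludes to, in standard notation (ours; the paper's specific exponents for the Bose-Hubbard transition may differ):

    ```latex
    % Kibble-Zurek mechanism: for a quench of duration \tau_Q across a critical
    % point with correlation-length exponent \nu and dynamical exponent z, the
    % frozen-out correlation length scales as
    \hat{\xi} \;\sim\; \tau_Q^{\,\nu/(1+z\nu)}
    % and the associated freeze-out time as
    \hat{t} \;\sim\; \tau_Q^{\,z\nu/(1+z\nu)}
    ```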

    A holistic scalability strategy for time series databases following cascading polyglot persistence

    Time-series databases aim to handle large amounts of data quickly, both when introducing new data into the system and when retrieving it later on. However, depending on the scenario in which these databases participate, reducing the number of requested resources becomes a further requirement. Following this goal, NagareDB and its Cascading Polyglot Persistence approach were born. They were intended not just to provide a fast time-series solution, but also to strike a good cost-efficiency balance. However, although they provided outstanding results, they lacked a natural way of scaling out in a cluster fashion. Consequently, monolithic deployments could extract the maximum value from the solution, but distributed ones had to rely on general scalability approaches. In this research, we propose a holistic approach specially tailored for databases following Cascading Polyglot Persistence, to further maximize their inherent resource-saving goals. The proposed approach reduces the cluster size by 33% in a setup with just three ingestion nodes, and by up to 50% in a setup with 10 ingestion nodes. Moreover, the evaluation shows that our scaling method provides efficient cluster growth, offering scalability speedups greater than 85% of a theoretically perfect scaling, while also ensuring data safety via data replication. This research was partly supported by Grant Agreement No. 857191, by the Spanish Ministry of Science and Innovation (contract PID2019-107255GB), and by the Generalitat de Catalunya (contract 2017-SGR-1414).
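    As a quick reading aid for the 85% figure: scaling efficiency is conventionally the achieved speedup divided by the ideal linear speedup. A minimal sketch with illustrative numbers (not taken from the paper):

    ```python
    def scaling_efficiency(t_single, t_cluster, n_nodes):
        """Efficiency of a cluster run versus ideal linear scaling.

        speedup    = t_single / t_cluster
        efficiency = speedup / n_nodes   (1.0 == perfect linear scaling)
        """
        speedup = t_single / t_cluster
        return speedup / n_nodes

    # Hypothetical example: a workload taking 1000 s on one node and
    # 117 s on 10 nodes gives a speedup of ~8.55x, i.e. ~85% efficiency.
    print(scaling_efficiency(1000.0, 117.0, 10))  # ~0.855
    ```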